ABSTRACT
Polarization is common in politics and public opinion. It is believed to be shaped by media as well as ideologies, and it is often incited by misinformation. However, little is known about the microscopic dynamics behind polarization and the resulting social tensions. By coupling opinion formation with strategy selection in different social dilemmas, we reveal how success at the individual level translates into global consensus or the lack thereof. When defection carries the fear of punishment in the absence of greed, as in the stag-hunt game, opinion fragmentation is weakest. Conversely, if defection promises a higher payoff and also evokes greed, as in the prisoner's dilemma and the snowdrift game, consensus is more difficult to attain. Our research thus challenges the top-down narrative of social tensions, showing that they may originate from fundamental principles at the individual level, such as the desire to prevail in pairwise evolutionary comparisons.
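The three games above differ only in whether defection is driven by greed (the temptation to exploit) or fear (of being exploited). A minimal illustrative sketch of this standard classification, using the temptation T and sucker's payoff S with reward R = 1 and punishment P = 0 (the numeric values below are hypothetical, not taken from the paper):

```python
# Classify a 2x2 social dilemma by the presence of greed and fear.
# Standard parameterisation: reward R = 1, punishment P = 0,
# T = temptation, S = sucker's payoff. Values below are hypothetical.

def classify(T, S, R=1.0, P=0.0):
    greed = T > R  # exploiting a cooperator pays more than mutual cooperation
    fear = S < P   # being exploited pays less than mutual defection
    if greed and fear:
        return "prisoner's dilemma"
    if greed:
        return "snowdrift"
    if fear:
        return "stag hunt"
    return "harmony"

print(classify(T=1.5, S=-0.5))  # greed and fear together: prisoner's dilemma
print(classify(T=1.5, S=0.5))   # greed without fear: snowdrift
print(classify(T=0.5, S=-0.5))  # fear without greed: stag hunt
```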
ABSTRACT
Even though the Theory of Mind in higher primates has been under investigation for decades, how it may have evolved remains an open problem. We propose an evolutionary game-theoretical model in which a finite population of individuals may use reasoning strategies to infer a response to the anticipated behavior of others within the context of a sequential dilemma, i.e., the Centipede Game. We show that strategies with bounded reasoning evolve and flourish under natural selection, provided they are allowed to make reasoning mistakes and a temptation for higher future gains is in place. We further show that non-deterministic reasoning co-evolves with an optimism bias that may lead to the selection of new equilibria, closely associated with the average behavior observed in experimental data. This work offers both a novel perspective on the evolution of bounded rationality and a co-evolutionary link between the evolution of Theory of Mind and the emergence of misbeliefs.
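As a benchmark for the bounded reasoning studied above, the fully rational solution of a Centipede Game can be computed by backward induction, which famously unravels to an immediate stop. The payoff scheme below is a common textbook variant (piles double each step), assumed purely for illustration and not the paper's exact setup:

```python
# Backward-induction benchmark for a Centipede Game: with perfect, unbounded
# reasoning the game unravels and play stops at the very first step.
# Payoff scheme is a common textbook variant, assumed for illustration only.

def rational_stop_step(n_steps):
    """Step at which a fully rational mover stops (ties resolved by stopping).

    Stopping at step t pays the mover 2**(t + 2) and the opponent 2**t;
    if every step is passed, both players receive 2**(n_steps + 1).
    """
    # Continuation payoffs, viewed as (next mover, other player).
    cont = (2 ** (n_steps + 1), 2 ** (n_steps + 1))
    stop_at = None
    for t in reversed(range(n_steps)):
        stop_mover, stop_other = 2 ** (t + 2), 2 ** t
        # Passing swaps roles: the current mover later receives cont[1].
        if stop_mover >= cont[1]:
            cont = (stop_mover, stop_other)
            stop_at = t
        else:
            cont = (cont[1], cont[0])
    return stop_at

print(rational_stop_step(6))  # 0: unbounded reasoning stops immediately
```

Experimental subjects typically continue well past the first node, which is the gap the co-evolving optimism bias above helps to explain.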
ABSTRACT
The difficulties associated with solving Humanity's major global challenges have increasingly led world leaders and everyday citizens to publicly adopt strong emotional responses, with either mixed or unknown impacts on others' actions. Here we present two experiments showing that non-verbal emotional expressions in group interactions play a critical role in determining how individuals behave when contributing to public goods entailing future and uncertain returns. Participants' investments were not only shaped by emotional expressions but were also enhanced by anger when compared with joy. Our results suggest that global coordination may benefit from interactions in which emotional expressions can be paramount.
ABSTRACT
Herding behavior imposes a social cost on individuals who do not follow the herd, thereby influencing human decision-making. This work proposes including a social cost derived from herding mentality in the payoffs of pairwise game interactions. We introduce a co-evolutionary asymmetric model with four individual strategies (cooperation vs. defection and herding vs. non-herding) to understand the co-emergence of herding behavior and cooperation. Computational experiments show how including herding costs promotes cooperation by increasing the parameter space under which cooperation persists. Results demonstrate a synergistic relationship between the emergence of cooperation and herding mentality: the highest level of cooperation is achieved when the herding mentality also reaches its highest level. Finally, we study different herding social costs and their relationship to the evolution of cooperation and herding. This study points to new social mechanisms, related to conformity-driven imitation behavior, that help to explain how and why cooperation prevails in human groups.
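A minimal sketch of how a herding cost might enter pairwise payoffs, assuming a cost linear in the fraction of disagreeing neighbours; the functional form and parameter are illustrative assumptions, not the paper's exact model:

```python
# Sketch of adding a conformity-driven herding cost to a pairwise payoff.
# Linear form and cost parameter are assumptions made for illustration.

def payoff_with_herding(base_payoff, my_strategy, neighbor_strategies, cost=0.5):
    """Subtract a social cost proportional to the fraction of neighbors
    whose strategy differs from the focal player's."""
    if not neighbor_strategies:
        return base_payoff
    disagreeing = sum(1 for s in neighbor_strategies if s != my_strategy)
    return base_payoff - cost * disagreeing / len(neighbor_strategies)

# A defector surrounded by cooperators pays the full herding cost:
print(payoff_with_herding(5.0, "D", ["C", "C", "C", "C"]))  # 4.5
# ...while one that follows the local herd pays nothing extra:
print(payoff_with_herding(5.0, "D", ["D", "D"]))  # 5.0
```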
ABSTRACT
Humans have developed considerable machinery used at scale to create policies and to distribute incentives, yet we are forever seeking ways to improve upon these, our institutions. Especially when funding is limited, it is imperative to optimise spending without sacrificing positive outcomes, a challenge that has been approached in several areas of the social, life and engineering sciences. These studies often neglect the availability of information, cost constraints or the underlying complex network structures that define real-world populations. Here, we extend these models to include the aforementioned concerns, and also test the robustness of their findings under stochastic social learning paradigms. Akin to real-world decisions on how best to distribute endowments, we study several incentive schemes, which consider information about the overall population, local neighbourhoods or the level of influence that a cooperative node has in the network, selectively rewarding cooperative behaviour if certain criteria are met. Following a transition towards a more realistic network setting and a stochastic behavioural update rule, we find that carelessly promoting cooperators can often lead to their downfall in socially diverse settings. These emergent cyclic patterns not only damage cooperation, but also decimate the budgets of external investors. Our findings highlight the complexity of designing effective and cogent investment policies in socially diverse populations.
ABSTRACT
Evolutionary Game Theory (EGT) provides an important framework to study collective behavior. It combines ideas from evolutionary biology and population dynamics with the game-theoretical modeling of strategic interactions. Its importance is highlighted by the numerous high-profile publications that have enriched different fields, ranging from biology to the social sciences, over many decades. Nevertheless, there has been no open-source library providing easy and efficient access to these methods and models. Here, we introduce EGTtools, an efficient hybrid C++/Python library which provides fast implementations of both analytical and numerical EGT methods. EGTtools can analytically evaluate a system based on the replicator dynamics. It can also evaluate any EGT problem by resorting to finite populations and large-scale Markov processes. Finally, it resorts to C++ and Monte Carlo simulations to estimate many important indicators, such as stationary or strategy distributions. We illustrate all these methodologies with concrete examples and analyses.
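As a reference point for what such a library computes, here is a hand-rolled replicator-dynamics iteration for a 2x2 game. This is NOT the EGTtools API, only a minimal plain-Python sketch of the underlying method:

```python
# Minimal replicator dynamics for a 2x2 game, the kind of system EGTtools
# evaluates analytically. Not the EGTtools API - a sketch of the method.

def replicator_step(x, A, dt=0.01):
    """One Euler step of dx_i/dt = x_i * (f_i - avg), where f = A x."""
    n = len(x)
    f = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * f[i] for i in range(n))
    return [x[i] + dt * x[i] * (f[i] - avg) for i in range(n)]

# Prisoner's dilemma payoff matrix for the row player: [[R, S], [T, P]].
A = [[3.0, 0.0],
     [5.0, 1.0]]
x = [0.9, 0.1]  # start with 90% cooperators
for _ in range(2000):
    x = replicator_step(x, A)
print(x[0] < 0.01)  # True: defection takes over in the prisoner's dilemma
```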
ABSTRACT
Home assistant chatbots, self-driving cars, drones and automated negotiation systems are just a few examples of the autonomous (artificial) agents that have pervaded our society. These agents enable the automation of multiple tasks, saving time and (human) effort. However, their presence in social settings raises the need for a better understanding of their effect on social interactions, and of how they may be used to enhance cooperation towards the public good instead of hindering it. To this end, we present an experimental study of human delegation to autonomous agents and of hybrid human-agent interactions, centered on a non-linear public goods dilemma with uncertain returns in which participants face a collective risk. Our aim is to understand experimentally whether the presence of autonomous agents has a positive or negative impact on social behaviour, equality and cooperation in such a dilemma. Our results show that cooperation and group success increase when participants delegate their actions to an artificial agent that plays on their behalf. Yet, this positive effect is less pronounced when humans interact in hybrid human-agent groups, where we mostly observe that humans in successful hybrid groups make higher contributions earlier in the game. We also show that participants wrongly believe that artificial agents will contribute less to the collective effort. In general, our results suggest that delegation to autonomous agents has the potential to act as a commitment device, preventing the temptation to deviate to an alternative (less collectively good) course of action as well as limiting responses based on betrayal aversion.
Subject(s)
Altruism, Cooperative Behavior, Automation, Game Theory, Humans, Motivation, Social Behavior
ABSTRACT
Regulation of advanced technologies such as Artificial Intelligence (AI) has become increasingly important, given the associated risks and apparent ethical issues. Given the great benefits promised to whoever first supplies such technologies, safety precautions and societal consequences might be ignored or shortchanged in exchange for speeding up development, thereby engendering a racing narrative among developers. Starting from a game-theoretical model describing an idealised technology race in a fully connected world of players, here we investigate how different interaction structures among race participants can alter collective choices and requirements for regulatory actions. Our findings indicate that, when participants display strong diversity in terms of connections and peer influence (e.g., when scale-free networks shape interactions among parties), the conflicts that exist in homogeneous settings are significantly reduced, thereby lessening the need for regulatory actions. Furthermore, our results suggest that technology governance and regulation may profit from the world's evident heterogeneity and inequality among firms and nations, enabling the design and implementation of meticulous interventions targeting a minority of participants that is capable of influencing an entire population towards an ethical and sustainable use of advanced technologies.
ABSTRACT
The spread of COVID-19 and the ensuing containment measures have accentuated the profound interdependence among nations and regions. This has been particularly evident in tourism, one of the sectors most affected by uncoordinated mobility restrictions. The impact of this interdependence on the tendency to adopt less or more restrictive measures is hard to evaluate, more so if diversity in economic exposure to citizens' mobility is considered. Here, we address this problem by developing an analytical and computational game-theoretical model encompassing the conflicts arising from the need to control the economic effects of global risks, such as the COVID-19 pandemic. The model includes the individual costs derived from severe restrictions imposed by governments, including the resulting economic interdependence among all the parties involved in the game. By using tourism-based data, the model is enriched with actual heterogeneous income losses, such that every player faces a different economic cost when applying restrictions. We show that economic interdependence enhances cooperation because of the decline in the expected payoffs of free-riding parties (i.e., those neglecting the application of mobility restrictions). Furthermore, we show (analytically and through numerical simulations) that these cross-exposures can transform the nature of the cooperation dilemma each region or country faces, modifying the position of the fixed points and the size of the basins of attraction that characterize this class of games. Finally, our results suggest that heterogeneity among regions may be used to leverage the impact of intervention policies by ensuring an agreement among the most relevant initial set of cooperators.
ABSTRACT
Indirect reciprocity (IR) is a key mechanism for understanding cooperation among unrelated individuals. It involves reputations and the complex information processing arising from social interactions. By helping someone, individuals may improve their reputation, which may be shared across a population and change the predisposition of others to reciprocate in the future. The reputation of individuals depends, in turn, on social norms that define what constitutes a good or bad action, offering a computationally and mathematically appealing way of studying the evolution of moral systems. Over the years, theoretical and empirical research has unveiled many features of cooperation under IR, exploring norms with varying degrees of complexity and information requirements. Recent results suggest that costly reputation spread, interaction observability and empathy are determinants of cooperation under IR. Importantly, such characteristics probably impact the level of complexity and the information requirements for IR to sustain cooperation. In this review, we present and discuss those recent results. We provide a synthesis of theoretical models and discuss previous conclusions through the lens of evolutionary game theory and cognitive complexity. We highlight open questions and suggest future research in this domain. This article is part of the theme issue 'The language of cooperation: reputation and honest signalling'.
Subject(s)
Cooperative Behavior, Psychological Models, Biological Evolution, Game Theory, Humans, Morals, Social Norms
ABSTRACT
What humans do when exposed to uncertainty, incomplete information, and a dynamic environment influenced by other agents remains an open scientific challenge, with important implications in both science and engineering applications. In these contexts, humans handle social situations by employing elaborate cognitive mechanisms such as theory of mind and risk sensitivity. Here we develop a novel theoretical model showing that both mechanisms promote coordinated behavior among self-regarding individuals. In particular, we combine cumulative prospect theory with level-k recursions to show how biases towards optimism and the capacity to plan ahead significantly increase coordinated, cooperative action. These results suggest that the reason why humans are good at coordination may stem from the fact that we are cognitively biased to do so.
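One concrete ingredient of cumulative prospect theory is its probability-weighting function, under which rare events are overweighted, one route to the optimism bias mentioned above. Below is the standard Kahneman-Tversky functional form; the parameter value 0.61 is a classic estimate from the literature, used here purely for illustration and not taken from this paper:

```python
# Kahneman-Tversky probability weighting from cumulative prospect theory.
# gamma < 1 overweights small probabilities and underweights large ones;
# gamma = 0.61 is a classic empirical estimate (illustrative choice here).

def weight(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(weight(0.05) > 0.05)  # True: a rare gain feels more likely than it is
print(weight(0.95) < 0.95)  # True: a near-certain gain is discounted
```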
Subject(s)
Decision Making/physiology, Biological Models, Psychomotor Performance/physiology, Theory of Mind/physiology, Computational Biology, Humans, Risk, Social Behavior, Uncertainty
ABSTRACT
The exploration of different behaviours is part of the repertoire individuals use to adapt to new environments. Here, we explore how the evolution of cooperative behaviour is affected by the interplay between exploration dynamics and social learning, in particular when individuals engage in a prisoner's dilemma along the edges of a social network. We show that when the population undergoes a transition from strong to weak exploration rates, a decline in the overall levels of cooperation is observed. However, if the rate of decay is lower for highly connected individuals (Leaders) than for less connected individuals (Followers), then the population is able to achieve higher levels of cooperation. Finally, we show that minor differences in selection intensities (the degree of determinism in social learning) and individual exploration rates can translate into major differences in the observed collective dynamics.
ABSTRACT
The emergence of pro-social behaviors remains a key open challenge across disciplines. In this context, there is growing evidence that expressing emotions may foster human cooperation. However, it remains unclear how emotions shape individual choices and interact with other cooperation mechanisms. Here, we provide a comprehensive experimental analysis of the interplay of emotion expressions with two important mechanisms: direct and indirect reciprocity. We show that cooperation in an iterated prisoner's dilemma emerges from the combination of the opponent's initial reputation, past behaviors, and emotion expressions. Moreover, all factors influenced the social norm adopted when assessing the action of others - i.e., how their counterparts' reputations are updated - thus, reflecting longer-term consequences. We expose a new class of emotion-based social norms, where emotions are used to forgive those that defect but also punish those that cooperate. These findings emphasize the importance of emotion expressions in fostering, directly and indirectly, cooperation in society.
ABSTRACT
The field of Artificial Intelligence (AI) is going through a period of great expectations, introducing a certain level of anxiety in research, business and also policy. This anxiety is further energised by an AI race narrative that makes people believe they might be missing out. Whether real or not, belief in this narrative may be detrimental, as some stakeholders will feel obliged to cut corners on safety precautions, or to ignore societal consequences, just to "win". Starting from a baseline model that describes a broad class of technology races in which winners draw a significant benefit compared to others (such as AI advances, patent races or pharmaceutical technologies), we investigate here how positive (rewards) and negative (punishments) incentives may beneficially influence the outcomes. We uncover conditions in which punishment is either capable of reducing the development speed of unsafe participants or has the capacity to reduce innovation through over-regulation. Alternatively, we show that, in several scenarios, rewarding those who follow safety measures may increase development speed while ensuring safe choices. Moreover, in the latter regimes, rewards do not suffer from the issue of over-regulation, as is the case for punishment. Overall, our findings provide valuable insights into the nature and kinds of regulatory actions most suitable to improve safety compliance in the contexts of both smooth and sudden technological shifts.
Asunto(s)
Inteligencia Artificial , Creatividad , Humanos , Motivación , Castigo , Recompensa , TecnologíaRESUMEN
Social dilemmas are often shaped by actions involving uncertain returns only achievable in the future, such as climate action or voluntary vaccination. In this context, uncertainty may produce non-trivial effects. Here, we assess experimentally - through a collective-risk dilemma - the effect of timing uncertainty, i.e. how uncertainty about when a target needs to be reached affects participants' behaviors. We show that timing uncertainty prompts not only early generosity but also polarized outcomes, in which participants' total contributions are distributed unevenly. Furthermore, analyzing participants' behavior under timing uncertainty reveals an increase in reciprocal strategies. A data-driven game-theoretical model captures the self-organizing dynamics underpinning these behavioral patterns. Timing uncertainty thus casts a shadow on the future that leads participants to respond early, while reciprocal strategies appear to be important for group success. Yet, the same uncertainty also leads to inequity and polarization, requiring the inclusion of new incentives to handle these societal issues.
ABSTRACT
Many biological and social systems show significant levels of collective action. Several cooperation mechanisms have been proposed, yet they have been mostly studied independently. Among these, direct reciprocity supports cooperation on the basis of repeated interactions among individuals. Signals and quorum dynamics may also drive cooperation. Here, we resort to an evolutionary game-theoretical model to jointly analyse these two mechanisms and study the conditions under which evolution selects for direct reciprocity, signalling, or their combination. We show that signalling alone leads to higher levels of cooperation than when combined with reciprocity, while offering additional robustness against errors. Specifically, successful strategies in the realm of direct reciprocity are often not selected in the presence of signalling, and memory of past interactions is exploited only opportunistically, in the case of earlier coordination failure. By contrast, signalling always evolves, even when costly. In light of these results, it may be easier to understand why direct reciprocity has been observed in only a limited number of cases among non-humans, whereas signalling is widespread at all levels of complexity.
Subject(s)
Cooperative Behavior, Game Theory, Biological Evolution, Memory, Theoretical Models
ABSTRACT
Multiplayer games have long been used as testbeds in artificial intelligence research, aptly referred to as the Drosophila of artificial intelligence. Traditionally, researchers have focused on using well-known games to build strong agents. This progress, however, can be better informed by characterizing games and their topological landscape. Tackling this latter question can facilitate understanding of agents and help determine what game an agent should target next as part of its training. Here, we show how network measures applied to response graphs of large-scale games enable the creation of a landscape of games, quantifying relationships between games of varying sizes and characteristics. We illustrate our findings in domains ranging from canonical games to complex empirical games capturing the performance of trained agents pitted against one another. Our results culminate in a demonstration leveraging this information to generate new and interesting games, including mixtures of empirical games synthesized from real world games.
ABSTRACT
Ensuring global cooperation often poses governance problems shadowed by the tragedy of the commons, as wrong-doers enjoy, at no cost, the benefits set up by right-doers. Institutional punishment of wrong-doers is well known to curtail their impetus as free-riders. However, institutions often have limited scope for imposing sanctions, more so when these are strict and potentially viewed as disproportionate. Inspired by the design principles proposed by the late Nobel laureate Elinor Ostrom, here we study the evolution and impact of a new form of institutional sanctioning, where punishment is graduated, growing with the incidence of free-riding. We develop an analytical model capable of identifying the conditions under which this design principle is conducive to the self-organization of stable institutions and cooperation. We employ evolutionary game theory in finite populations and non-linear public goods dilemmas in the presence of a risk of global losses, whose solution requires the self-organization of decision makers into an overall cooperative state. We show that graduated punishment is more effective in promoting widespread cooperation than the conventional forms of punishment studied to date, while also being less severe and thus, presumably, easier to implement. This effect is enhanced whenever the costs of implementation are positively correlated with the severity of punishment. We frame our model within the context of the global reduction of carbon emissions, but the results are shown to be general enough to be applicable to other collective action problems, shedding further light on the origins of human institutions.
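The contrast between a graduated sanction, whose severity grows with the incidence of free-riding, and a conventional flat sanction can be sketched in a few lines; the linear form and parameter names are illustrative assumptions, not the paper's exact model:

```python
# Graduated vs. flat institutional sanctions (illustrative forms only).

def graduated_fine(num_defectors, group_size, max_fine=1.0):
    """Fine per defector, scaling with the fraction of free-riders."""
    return max_fine * num_defectors / group_size

def flat_fine(num_defectors, group_size, max_fine=1.0):
    """Conventional sanction: full severity as soon as anyone free-rides."""
    return max_fine if num_defectors > 0 else 0.0

print(graduated_fine(1, 10))  # mild when free-riding is rare
print(graduated_fine(9, 10))  # severe when free-riding is widespread
print(flat_fine(1, 10))       # a lone defector is hit at full severity
```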
Subject(s)
Cooperative Behavior, Punishment, Carbon, Game Theory, Humans, Reading Frames
ABSTRACT
Mitigating the effects of climate change involves strategic decisions by individuals who may choose to limit their emissions at a cost. Everyone shares the ensuing benefits, so individuals can free ride on the effort of others, which may lead to the tragedy of the commons. For this reason, climate action can be conveniently formulated in terms of public goods dilemmas, often assuming that a minimum collective effort is required to ensure any benefit, and that decision-making may be contingent on the risk associated with future losses. Here we investigate the impact of reward and punishment in this type of collective endeavor - coined collective-risk dilemmas - by means of a dynamic, evolutionary approach. We show that rewards (positive incentives) are essential to initiate cooperation, mostly when the perception of risk is low. On the other hand, we find that sanctions (negative incentives) are instrumental in maintaining cooperation. Altogether, our results are gratifying, given the a priori limitations of effectively implementing sanctions in international agreements. Finally, we show that whenever collective action is most difficult to achieve, the best results are obtained when both rewards and sanctions are synergistically combined into a single policy.
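A minimal sketch of the collective-risk dilemma payoff structure referred to above: if the group's total contribution misses the threshold, each player's remaining endowment is lost with probability `risk`. All parameter values below are illustrative, not the paper's:

```python
# Collective-risk dilemma: expected payoff of one player after the
# collective outcome is resolved (illustrative parameterisation).

def crd_payoff(contribution, total_contributions, threshold, risk, endowment=1.0):
    remaining = endowment - contribution
    if total_contributions >= threshold:
        return remaining          # target met: everyone keeps what is left
    return (1.0 - risk) * remaining  # target missed: losses occur with prob. risk

# Free-riding in a failing group only pays when the perceived risk is low:
print(crd_payoff(0.0, 3.0, threshold=5.0, risk=0.9))  # high risk punishes failure
print(crd_payoff(0.5, 5.0, threshold=5.0, risk=0.9))  # a successful group keeps the rest
```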
ABSTRACT
When an individual makes a judgement about the actions of another individual, taking the latter's viewpoint into consideration enhances cooperation in society at large.